Map Generation from Large Scale Incomplete and Inaccurate Data Labels
Accurately and globally mapping human infrastructure is an important and
challenging task, with applications in routing, regulation-compliance
monitoring, and natural disaster response management. In this paper we
present progress in developing an algorithmic pipeline and distributed compute
system that automates the process of map creation using high-resolution aerial
images. Unlike previous studies, most of which use datasets that are available
only in a few cities across the world, we utilize publicly available imagery
and map data, both of which cover the contiguous United States (CONUS). We
approach the technical challenge of inaccurate and incomplete training data by
adopting state-of-the-art convolutional neural network architectures such as
the U-Net and the CycleGAN to incrementally generate maps with increasingly
more accurate and more complete labels of man-made infrastructure such as roads
and houses. Since scaling the mapping task to CONUS calls for parallelization,
we then adopted an asynchronous distributed stochastic parallel gradient
descent training scheme to distribute the computational workload onto a cluster
of GPUs with nearly linear speed-up.
Comment: This paper is accepted by KDD 202
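The asynchronous parallel gradient descent idea described above can be sketched in miniature. The toy below substitutes Python threads for a GPU cluster and a one-parameter least-squares objective for a segmentation network; all names and constants are illustrative, not the paper's code. Each worker reads the shared parameter (possibly slightly stale), computes a stochastic gradient, and applies its update without waiting for the other workers:

```python
import threading
import random

# Toy objective: fit w to minimise sum_i (w * x_i - y_i)^2 with y = 3x,
# so the true parameter is w = 3.0.
DATA = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0)]

w = [0.0]                       # shared parameter, updated asynchronously
lock = threading.Lock()

def worker(steps, lr=0.05, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        x, y = rng.choice(DATA)
        grad = 2.0 * (w[0] * x - y) * x   # read may be stale: no barrier
        with lock:                         # only the write itself is atomic
            w[0] -= lr * grad

threads = [threading.Thread(target=worker, args=(400, 0.05, s))
           for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(round(w[0], 2))  # → 3.0
```

Because each stochastic gradient is a contraction toward the true parameter, the workers converge even when their reads are stale; this is the property that makes asynchronous schemes scale with nearly linear speed-up.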
Efficacy of auriculotherapy for shoulder pain - an integrative review
Objective: to review studies describing auriculotherapy treatments for shoulder pain, together with protocols reported in books. Method: the Virtual Health Library was used to access the LILACS and SCIELO databases with the descriptors "auriculoterapia" and "ombro", retrieving 5 studies published between 2001 and 2015, plus bibliographic references. The inclusion criteria were original studies addressing auriculotherapy and shoulder pain, and books describing protocols for shoulder pain. The exclusion criterion was literature-review studies. After analysis, 1 study was excluded for being a literature review, leaving 4 studies. Five books were also included because they present protocols for shoulder pain. Results: the studies reviewed reported positive results for auriculotherapy applied to the shoulder. One study found that auriculotherapy combined with tuina is more effective; another observed better results with acupuncture. The remaining 2 studies, conducted with patients with painful shoulder syndrome and repetitive strain injuries (LER/DORT), used auriculotherapy alone, with good results for pain. Final considerations: this review highlights the need for more publications evaluating the efficacy of auriculotherapy for this condition. An effective therapy could reduce medication use, the side effects medications cause, and government spending, since auriculotherapy is a low-cost therapy. The books presenting protocols for treating shoulder pathologies may also help professionals in the treatment of their patients. Many different protocols were found, both in the books and in the studies reviewed.
NASTransfer: Analyzing Architecture Transferability in Large Scale Neural Architecture Search
Neural Architecture Search (NAS) is an open and challenging problem in
machine learning. While NAS offers great promise, the prohibitive computational
demand of most of the existing NAS methods makes it difficult to directly
search the architectures on large-scale tasks. The typical way of conducting
large scale NAS is to search for an architectural building block on a small
dataset (either using a proxy set from the large dataset or a completely
different small scale dataset) and then transfer the block to a larger dataset.
Despite a number of recent results that show the promise of transfer from proxy
datasets, a comprehensive evaluation of different NAS methods studying the
impact of different source datasets has not yet been addressed. In this work,
we propose to analyze the architecture transferability of different NAS methods
by performing a series of experiments on large scale benchmarks such as
ImageNet1K and ImageNet22K. We find that: (i) the size and domain of the proxy
set do not seem to influence architecture performance on the target dataset.
On average, the transfer performance of architectures searched using completely
different small datasets (e.g., CIFAR10) is similar to that of architectures
searched directly on proxy target datasets. However, the design of the proxy
set has a considerable impact on the ranking of different NAS methods. (ii) While different
NAS methods show similar performance on a source dataset (e.g., CIFAR10), they
significantly differ on the transfer performance to a large dataset (e.g.,
ImageNet1K). (iii) Even on large datasets, the random sampling baseline is very
competitive, but choosing an appropriate combination of proxy set and
search strategy can provide a significant improvement over it. We believe that
our extensive empirical analysis will prove useful for future design of NAS
algorithms.
Comment: 19 pages, 19 figures, 6 tables
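The proxy-then-transfer workflow the abstract evaluates can be illustrated with a minimal random-search baseline. The architecture space and the two score functions below are invented purely for illustration (stand-ins for proxy-set and target-set accuracy); the point is the workflow: search on a cheap proxy, then transfer the best-found block to the target:

```python
import random

# Toy "architecture" space: (depth, width) pairs.
SPACE = [(d, w) for d in range(1, 6) for w in (16, 32, 64)]

def proxy_score(arch):
    # hypothetical accuracy on a small proxy set
    d, w = arch
    return 0.5 + 0.05 * d + 0.001 * w - 0.01 * d * d

def target_score(arch):
    # hypothetical accuracy on the large target set (correlated, not identical)
    d, w = arch
    return 0.4 + 0.06 * d + 0.002 * w - 0.012 * d * d

def random_search(score, trials, seed=0):
    """Random sampling baseline: draw candidates, keep the best under `score`."""
    rng = random.Random(seed)
    candidates = [rng.choice(SPACE) for _ in range(trials)]
    return max(candidates, key=score)

best_on_proxy = random_search(proxy_score, 10)
transferred = target_score(best_on_proxy)              # transfer the block
direct = target_score(random_search(target_score, 10)) # search on target itself
print(best_on_proxy, round(direct - transferred, 3))   # transfer gap
```

With the same candidate pool, the directly searched architecture can never score below the transferred one on the target; the interesting empirical question, which the paper studies at ImageNet scale, is how small that gap is in practice and how it reorders the ranking of NAS methods.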
Checking Priority Queues
We describe a checker for priority queues. It supports the full repertoire of priority queue operations (insert, del-min, find-min, decrease-p, and del-item). It requires O(1) amortised time per operation and uses linear additional space (i.e., the same amount as the priority queue). The checker reports an error occurring in operation i before operation i + cN + 1 is completed, where N is the number of elements in the queue at the time the error occurred and c < 1 is a constant. We show that an on-line checker, i.e., a checker that reports errors immediately, must have running time Ω(n log n) in the worst case for a sequence of n priority queue operations. This lower bound holds in the comparison model of computation.
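The core idea of checking a priority queue, verifying after the fact that every del-min really returned the minimum of the live elements, can be sketched as follows. This toy logs operations and replays them against a trusted multiset; it is only an illustration of delayed checking, not the paper's checker, which achieves O(1) amortised overhead with a more refined certificate scheme:

```python
import heapq

class CheckedPQ:
    """A binary heap wrapped with an operation log for delayed checking."""

    def __init__(self):
        self._heap = []
        self._log = []          # (op, key) pairs in execution order

    def insert(self, key):
        heapq.heappush(self._heap, key)
        self._log.append(("insert", key))

    def del_min(self):
        key = heapq.heappop(self._heap)
        self._log.append(("del_min", key))
        return key

    def verify(self):
        # Replay the log against a trusted multiset: each del_min must have
        # returned a key that was live and minimal at that moment.
        live = []
        for op, key in self._log:
            if op == "insert":
                live.append(key)
            else:
                assert key in live and key == min(live), "priority queue error"
                live.remove(key)
        return True

pq = CheckedPQ()
for k in (5, 1, 3):
    pq.insert(k)
assert pq.del_min() == 1
assert pq.del_min() == 3
assert pq.verify()
```

Note the trade-off the abstract's lower bound formalises: this replay check is cheap per operation but reports errors only when verify runs, whereas an on-line checker that flags errors immediately must itself pay Ω(n log n) comparisons.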
Runtime Prediction of Real Programs on Real Machines
Algorithms are more and more made available as part of libraries or tool kits. For a user of such a library, statements of asymptotic running times are almost meaningless, as he has no way to estimate the constants involved. To choose the right algorithm for the targeted problem size and the available hardware, knowledge about these constants is important. Methods to determine the constants based on regression analysis or operation counting are not practicable in the general case, due to inaccuracy and cost, respectively. We present a new general method to determine the implementation- and hardware-specific running time constants for combinatorial algorithms. This method requires no changes to the implementation of the investigated algorithm and is applicable to a wide range of programming languages; only some additional code is necessary. The determined constants are correct within a constant factor which depends only on the hardware platform. As an example, the constants of an implementation of a hierarchy of algorithms and data structures are determined. The hierarchy consists of an algorithm for the maximum weighted bipartite matching problem (MWBM), Dijkstra's algorithm, a Fibonacci heap, and a graph representation based on adjacency lists. The errors in the running time prediction of these algorithms using exact execution frequencies are at most 50% on the tested hardware platforms.
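The calibrate-then-predict idea can be sketched in miniature: count the dominant operations of an algorithm on a small instance, divide measured time by the count to get an implementation-and-hardware constant, and predict the runtime of a larger instance from its counts alone. The sketch below instruments only the heap operations of a binary-heap Dijkstra (the paper works with a Fibonacci heap and a richer cost model, so this is illustrative, not the paper's method):

```python
import heapq
import time

def dijkstra(adj, src):
    """Dijkstra's algorithm instrumented with counters for heap operations."""
    dist = {src: 0}
    heap = [(0, src)]
    ops = {"push": 1, "pop": 0}
    while heap:
        d, u = heapq.heappop(heap)
        ops["pop"] += 1
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
                ops["push"] += 1
    return dist, ops

# Calibrate a per-heap-operation constant on a small path graph ...
small = {i: [(i + 1, 1)] for i in range(1_000)}
t0 = time.perf_counter()
_, ops = dijkstra(small, 0)
elapsed = time.perf_counter() - t0
const = elapsed / (ops["push"] + ops["pop"])

# ... and predict the runtime of a larger instance from its counts alone.
big = {i: [(i + 1, 1)] for i in range(10_000)}
_, big_ops = dijkstra(big, 0)
predicted = const * (big_ops["push"] + big_ops["pop"])
```

A real prediction would charge every operation class its own constant, which is exactly the role of the exact execution frequencies mentioned in the abstract; the reported errors of at most 50% reflect how well such per-operation constants transfer across instance sizes.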